
Conversation

Contributor

@PatKamin PatKamin commented Nov 19, 2025

@PatKamin PatKamin force-pushed the patkamin/core-preset branch from 24d3f10 to 2107bed on November 20, 2025 09:41
@PatKamin PatKamin marked this pull request as ready for review November 20, 2025 12:50
@PatKamin PatKamin requested a review from a team as a code owner November 20, 2025 12:50
@PatKamin PatKamin force-pushed the patkamin/core-preset branch from 2107bed to 2e00d9b on November 20, 2025 12:55
* If creating a new category, create a new `Suite` class inheriting from `benches.base.Suite`. Implement `name()` and `benchmarks()`. Implement `setup()` if the suite requires shared setup. Add group metadata via `additional_metadata()` if needed.
3. **Register Suite:** Import and add your new `Suite` instance to the `suites` list in `main.py`.
4. **Add to Presets:** If adding a new suite, add its `name()` to the relevant lists in `presets.py` (e.g., "Full", "Normal") so it runs with those presets. Update `README.md` and benchmarking workflow to include the new suite in presets' description/choices. Don't forget to create a new Suite object in `main.py`.
Contributor

the new sentence is literally point no. 3 above. So perhaps re-phrase it a little here, e.g.:
... description/choices. If you're only adding a new preset, don't forget ...
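
For illustration, a minimal sketch of the suite-creation steps above, assuming the `Suite` base class exposes exactly the methods the guide names (`name()`, `benchmarks()`, `setup()`); `MySuite` and its return values are hypothetical:

```python
from benches.base import Suite  # base class named in the contribution guide


class MySuite(Suite):
    """Hypothetical example suite; method signatures are assumptions."""

    def name(self):
        # The string added to preset lists in presets.py must match this.
        return "MySuite"

    def setup(self):
        # Optional: shared setup (e.g. fetching or building workloads).
        pass

    def benchmarks(self):
        # Return the benchmark instances this suite runs; empty here to
        # keep the sketch self-contained.
        return []
```

To finish registration, the suite instance would be appended to the `suites` list in `main.py` and `"MySuite"` added to the relevant preset lists in `presets.py` (e.g., "Full", "Normal").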


class ComputeBenchCoreSuite(ComputeBench):
    """
    A suite of core compute benchmark scenarios for quick runs.
    """
Contributor

didn't we expect 100x more runs here?

Add 'Core' Compute Benchmarks preset with SubmitKernel scenarios covering
the following parameters:
- time measurement
- all runtimes
- out-of-order/in-order queue
- with/without measuring completion time
- with/without using events.
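
For concreteness, a rough sketch of the scenario matrix the commit message describes; `SubmitKernelCase`, its field names, and the runtime names are illustrative assumptions, not the actual benchmark API:

```python
import itertools
from dataclasses import dataclass


@dataclass
class SubmitKernelCase:
    """Placeholder for the real SubmitKernel benchmark configuration."""
    runtime: str              # runtime under test; names are assumed
    in_order_queue: bool      # in-order vs. out-of-order queue
    measure_completion: bool  # with/without measuring completion time
    use_events: bool          # with/without using events


def core_submit_kernel_cases(runtimes):
    # Cross product of the parameters listed above: every runtime x
    # queue order x completion measurement x event usage.
    return [
        SubmitKernelCase(rt, in_order, completion, events)
        for rt, in_order, completion, events in itertools.product(
            runtimes, (False, True), (False, True), (False, True)
        )
    ]


# With three runtimes this yields 3 * 2 * 2 * 2 = 24 scenarios.
print(len(core_submit_kernel_cases(["sycl", "l0", "ur"])))
```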